This work presents a novel framework, CISFA (Contrastive Image Synthesis and Self-supervised Feature Adaptation), which builds on image domain translation and unsupervised feature adaptation for cross-modality biomedical image segmentation. Unlike existing works, we use a one-sided generative model and add a weighted patch-wise contrastive loss between sampled patches of the input image and the corresponding synthetic image, which serves as a shape constraint. Moreover, we observe that the generated and input images share similar structural information but belong to different modalities. We therefore enforce a contrastive loss on the generated and input images to train the encoder of the segmentation model, minimizing the discrepancy between paired images in the learned embedding space. Compared with existing works that rely on adversarial learning for feature adaptation, this approach enables the encoder to learn domain-independent features in a more explicit way. We extensively evaluate our method on segmentation tasks involving CT and MRI images of abdominal cavities and whole hearts. Experimental results show that the proposed framework not only produces synthetic images with less distortion of organ shapes, but also outperforms state-of-the-art domain adaptation methods by a large margin.
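
As a rough illustration of the patch-wise contrastive term described above, the following PyTorch sketch computes a weighted InfoNCE loss between patch features sampled at matching locations of the input and synthetic images; the feature dimensions, weighting scheme, and temperature are assumptions for illustration, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def weighted_patch_nce(feat_input, feat_synth, weights, tau=0.07):
    """Weighted patch-wise InfoNCE loss (illustrative sketch).

    feat_input, feat_synth: (N, D) features of N patches sampled at the same
        spatial locations in the input and the synthetic image.
    weights: (N,) per-patch weights, e.g. emphasizing organ regions (assumed).
    """
    q = F.normalize(feat_input, dim=1)   # queries from the input image
    k = F.normalize(feat_synth, dim=1)   # keys from the synthetic image
    logits = q @ k.t() / tau             # (N, N) patch-to-patch similarities
    targets = torch.arange(q.size(0), device=q.device)  # positive = same location
    per_patch = F.cross_entropy(logits, targets, reduction="none")
    return (weights * per_patch).sum() / weights.sum()

# toy usage
loss = weighted_patch_nce(torch.randn(64, 256), torch.randn(64, 256), torch.rand(64))
```
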
Deep learning methods are notoriously data-hungry, requiring large numbers of labeled samples. Unfortunately, the heavy workload of interactive sample labeling greatly hinders the application of deep learning methods, especially for 3D modeling tasks that require heterogeneous samples. To alleviate the data annotation effort for 3D modeling of façades, this paper proposes a semi-supervised adversarial recognition strategy embedded in inverse procedural modeling. Starting from a textured LOD-2 (Level of Detail) model, we use classical convolutional neural networks to recognize the types and estimate the parameters of windows from image patches. The window types and parameters are then assembled into a procedural grammar. A simple procedural engine is built inside existing 3D modeling software, producing fine-grained window geometries. To obtain a useful model from only a few labeled samples, we leverage a generative adversarial network to train the feature extractor in a semi-supervised manner. The adversarial training strategy can also exploit unlabeled data, making the training phase more stable. Experiments on publicly available façade image datasets show that, with the same network structure, the proposed training strategy improves classification accuracy by about 10% and parameter estimation by about 50%. Moreover, the performance gain is even more evident when testing on different data with different façade styles.
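
The semi-supervised adversarial training of the feature extractor could, for instance, follow the common K+1-class discriminator formulation sketched below, where labeled patches supply a supervised term and unlabeled/generated patches supply real/fake terms; this is a generic sketch under that assumption, not necessarily the paper's exact losses.

```python
import torch
import torch.nn.functional as F

def semi_supervised_d_loss(logits_labeled, labels, logits_unlabeled, logits_fake):
    """Discriminator/classifier loss for semi-supervised adversarial training
    (a common K+1-class formulation; the paper's exact losses may differ).

    logits_*: (B, K) class logits from the shared feature extractor + classifier.
    Real patches (labeled or not) should look like one of the K classes,
    generated patches should not.
    """
    # supervised term on the few labeled patches
    sup = F.cross_entropy(logits_labeled, labels)
    # "realness" score = logsumexp over class logits
    def realness(logits):
        return torch.logsumexp(logits, dim=1)
    unsup_real = F.softplus(-realness(logits_unlabeled)).mean()  # push real up
    unsup_fake = F.softplus(realness(logits_fake)).mean()        # push fake down
    return sup + unsup_real + unsup_fake

# toy usage with 5 window classes
loss = semi_supervised_d_loss(torch.randn(8, 5), torch.randint(0, 5, (8,)),
                              torch.randn(16, 5), torch.randn(16, 5))
```
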
Accurate abnormality localization in chest X-rays (CXR) can benefit the clinical diagnosis of various thoracic diseases. However, lesion-level annotation can only be performed by experienced radiologists; it is tedious and time-consuming, and thus difficult to acquire. This makes it hard to develop a fully supervised abnormality localization system for CXR. In this regard, we propose to train a CXR abnormality localization framework via a weakly semi-supervised strategy, termed Point Beyond Class (PBC), which uses a small number of fully annotated CXRs with lesion-level bounding boxes together with a large number of weakly annotated samples carrying only point annotations. Such a point annotation setting can provide weak instance-level information for abnormality localization at a marginal annotation cost. In particular, the core idea behind PBC is to learn a robust and accurate mapping from point annotations to bounding boxes that is insensitive to where the annotated points are placed. To this end, a regularization term, namely multi-point consistency, is proposed; it drives the model to generate consistent bounding boxes from different point annotations within the same abnormality. Furthermore, a self-supervision termed symmetric consistency is proposed to deeply exploit useful information from the weakly annotated data for abnormality localization. Experimental results on the RSNA and VinDr-CXR datasets demonstrate the effectiveness of the method. When trained with less than 20% of box-level labels, our PBC achieves an improvement of ~5 in mAP compared with the current state-of-the-art method (i.e., Point DETR). Code is available at https://github.com/haozheliu-st/point-beyond-class.
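
A minimal sketch of what a multi-point consistency regularizer could look like: boxes predicted from several different points sampled inside the same abnormality are pulled toward their mean prediction. The penalty form and sampling scheme here are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def multi_point_consistency(boxes_per_point):
    """Consistency regularizer (illustrative sketch).

    boxes_per_point: (P, N, 4) boxes in (x1, y1, x2, y2) predicted by a
    point-to-box model from P different points sampled inside the same
    N abnormalities. Deviations from the mean prediction are penalized so
    the mapping becomes insensitive to point placement.
    """
    mean_box = boxes_per_point.mean(dim=0, keepdim=True)        # (1, N, 4)
    return F.l1_loss(boxes_per_point, mean_box.expand_as(boxes_per_point))

# toy usage: 4 points per abnormality, 8 abnormalities
loss = multi_point_consistency(torch.rand(4, 8, 4))
```
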
Multi-instance point cloud registration is the problem of estimating multiple poses of source point cloud instances within a target point cloud. Solving this problem is challenging because the inlier correspondences of one instance constitute outliers for all the other instances. Existing methods often rely on time-consuming hypothesis sampling or on features that leverage spatial consistency, resulting in limited performance. In this paper, we propose PointCLM, a contrastive-learning-based framework for multi-instance point cloud registration. We first utilize contrastive learning to learn well-distributed deep representations for the input putative correspondences. Based on these representations, we then propose an outlier pruning strategy and a clustering strategy to efficiently remove outliers and assign the remaining correspondences to the correct instances. Our method outperforms the state-of-the-art methods on both synthetic and real datasets.
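
A sketch of a contrastive loss over correspondence embeddings, where correspondences belonging to the same instance act as positives and everything else (including outliers) as negatives; the labeling convention (-1 for outliers) and temperature are illustrative assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def correspondence_contrastive_loss(embeddings, instance_ids, tau=0.1):
    """Supervised-contrastive-style loss over putative correspondences (sketch).

    embeddings: (N, D) learned features of N putative correspondences.
    instance_ids: (N,) instance label per correspondence (-1 for outliers).
    """
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / tau                                   # (N, N) similarities
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float("-inf"))               # drop self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    same = (instance_ids[:, None] == instance_ids[None, :]) & ~eye
    same = same & (instance_ids[:, None] >= 0)              # outliers have no positives
    pos_log_prob = log_prob.masked_fill(~same, 0.0).sum(dim=1)
    loss = -pos_log_prob / same.sum(dim=1).clamp(min=1)
    return loss[same.any(dim=1)].mean()                     # anchors with positives only

# toy usage: 6 correspondences from two instances plus outliers
loss = correspondence_contrastive_loss(torch.randn(6, 32),
                                       torch.tensor([0, 0, 1, 1, -1, -1]))
```
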
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot or only marginally benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) Distilling token relations is more effective than CLS-token- and feature-based distillation; 2) Using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) Weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, setting a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models: exploring better training methods rather than introducing inductive biases into architectures, as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
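
Finding 1) above (distilling token relations) could be implemented roughly as below: token-to-token relation maps are computed for the teacher's intermediate layer and for the student, and the student is trained to match them with a soft cross-entropy. The relation type, normalization, and temperature here are assumptions; the paper studies several variants.

```python
import torch
import torch.nn.functional as F

def relation_map(tokens, tau=1.0):
    """Token-to-token relation map: softmax over pairwise similarities.
    tokens: (B, N, D) token features; the result (B, N, N) does not depend on D,
    so teacher and student may have different widths."""
    tokens = F.normalize(tokens, dim=-1)
    return F.softmax(tokens @ tokens.transpose(1, 2) / tau, dim=-1)

def token_relation_distill_loss(student_tokens, teacher_tokens):
    """Soft cross-entropy between student and teacher relation maps (sketch)."""
    r_s = relation_map(student_tokens)
    r_t = relation_map(teacher_tokens).detach()
    return -(r_t * torch.log(r_s + 1e-8)).sum(dim=-1).mean()

# toy usage: 197 tokens, teacher width 768, student width 192
loss = token_relation_distill_loss(torch.randn(2, 197, 192), torch.randn(2, 197, 768))
```
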
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
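
A toy sketch of a style-aware adaptive feed-forward block in which a style code modulates the hidden units of the FFN; the modulation form (per-channel scaling of the first projection's output) is an assumption for illustration, not the paper's exact style-aware adaptive transformer.

```python
import torch
import torch.nn as nn

class StyleAdaptiveFFN(nn.Module):
    """Feed-forward block whose hidden activations are modulated by a style code
    (rough sketch of the idea; dimensions and modulation form are assumed)."""

    def __init__(self, d_model, d_hidden, d_style):
        super().__init__()
        self.w1 = nn.Linear(d_model, d_hidden)
        self.w2 = nn.Linear(d_hidden, d_model)
        self.to_scale = nn.Linear(d_style, d_hidden)  # style code -> per-channel scale

    def forward(self, x, style_code):
        # x: (B, T, d_model), style_code: (B, d_style)
        scale = 1.0 + self.to_scale(style_code).unsqueeze(1)   # (B, 1, d_hidden)
        h = torch.relu(self.w1(x) * scale)                     # style-modulated hidden units
        return self.w2(h)

# toy usage
ffn = StyleAdaptiveFFN(d_model=256, d_hidden=1024, d_style=128)
out = ffn(torch.randn(2, 50, 256), torch.randn(2, 128))
```
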
Decompilation aims to transform a low-level programming language (LPL) (e.g., a binary file) into its functionally equivalent high-level programming language (HPL) (e.g., C/C++). It is a core technology in software security, especially in vulnerability discovery and malware analysis. In recent years, with the successful application of neural machine translation (NMT) models in natural language processing (NLP), researchers have tried to build neural decompilers by borrowing the idea of NMT. They formulate the decompilation process as a translation problem between LPL and HPL, aiming to reduce the human cost required to develop decompilation tools and improve their generalizability. However, state-of-the-art learning-based decompilers do not cope well with compiler-optimized binaries. Since real-world binaries are mostly compiler-optimized, decompilers that do not consider optimized binaries have limited practical significance. In this paper, we propose a novel learning-based approach named NeurDP that targets compiler-optimized binaries. NeurDP uses a graph neural network (GNN) model to convert LPL to an intermediate representation (IR), which bridges the gap between source code and optimized binary. We also design an Optimized Translation Unit (OTU) to split functions into smaller code fragments for better translation performance. Evaluation results on datasets containing various types of statements show that NeurDP can decompile optimized binaries with 45.21% higher accuracy than state-of-the-art neural decompilation frameworks.
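
A minimal sketch of one message-passing layer over a binary's instruction graph, of the kind a GNN-based LPL-to-IR model might use; the graph construction, feature dimensions, and update rule are illustrative assumptions, not NeurDP's actual architecture.

```python
import torch
import torch.nn as nn

class InstructionGNNLayer(nn.Module):
    """One message-passing layer over an instruction graph whose edges encode
    control/data flow between instructions (illustrative sketch only)."""

    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)   # transform source-node features into messages
        self.upd = nn.GRUCell(dim, dim)  # update node states with aggregated messages

    def forward(self, node_feats, edge_index):
        # node_feats: (N, D) instruction embeddings; edge_index: (2, E) as (src, dst)
        src, dst = edge_index
        messages = torch.zeros_like(node_feats)
        messages.index_add_(0, dst, self.msg(node_feats[src]))  # sum messages per node
        return self.upd(messages, node_feats)

# toy graph: 4 instructions connected by 3 flow edges
layer = InstructionGNNLayer(64)
out = layer(torch.randn(4, 64), torch.tensor([[0, 1, 2], [1, 2, 3]]))
```
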
Driven by improved architectures and better representation learning frameworks, the field of visual recognition has enjoyed rapid modernization and performance boost in the early 2020s. For example, modern ConvNets, represented by ConvNeXt, have demonstrated strong performance in various scenarios. While these models were originally designed for supervised learning with ImageNet labels, they can also potentially benefit from self-supervised learning techniques such as masked autoencoders (MAE). However, we found that simply combining these two approaches leads to subpar performance. In this paper, we propose a fully convolutional masked autoencoder framework and a new Global Response Normalization (GRN) layer that can be added to the ConvNeXt architecture to enhance inter-channel feature competition. This co-design of self-supervised learning techniques and architectural improvement results in a new model family called ConvNeXt V2, which significantly improves the performance of pure ConvNets on various recognition benchmarks, including ImageNet classification, COCO detection, and ADE20K segmentation. We also provide pre-trained ConvNeXt V2 models of various sizes, ranging from an efficient 3.7M-parameter Atto model with 76.7% top-1 accuracy on ImageNet, to a 650M Huge model that achieves a state-of-the-art 88.9% accuracy using only public training data.
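
The GRN layer described above admits a compact implementation: aggregate a per-channel L2 norm over spatial positions, normalize it across channels, and use the result to recalibrate the input with a residual connection. A sketch in PyTorch, assuming a channels-last (N, H, W, C) layout; the epsilon and zero initialization are assumptions.

```python
import torch
import torch.nn as nn

class GRN(nn.Module):
    """Global Response Normalization layer (sketch, channels-last input)."""

    def __init__(self, dim):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1, 1, 1, dim))
        self.beta = nn.Parameter(torch.zeros(1, 1, 1, dim))

    def forward(self, x):
        # x: (N, H, W, C)
        gx = torch.norm(x, p=2, dim=(1, 2), keepdim=True)        # per-channel spatial L2 norm
        nx = gx / (gx.mean(dim=-1, keepdim=True) + 1e-6)         # normalize across channels
        return self.gamma * (x * nx) + self.beta + x              # recalibrate + residual

# toy usage
out = GRN(64)(torch.randn(2, 7, 7, 64))
```
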
In this paper, we propose a novel framework dubbed peer learning to deal with the problem of biased scene graph generation (SGG). This framework uses predicate sampling and consensus voting (PSCV) to encourage different peers to learn from each other, improving model diversity and mitigating bias in SGG. To address the heavily long-tailed distribution of predicate classes, we propose to use predicate sampling to divide and conquer this issue. As a result, the model is less biased and makes more balanced predicate predictions. Specifically, one peer may not be sufficiently diverse to discriminate between different levels of predicate distributions. Therefore, we sample the data distribution based on frequency of predicates into sub-distributions, selecting head, body, and tail classes to combine and feed to different peers as complementary predicate knowledge during the training process. The complementary predicate knowledge of these peers is then ensembled utilizing a consensus voting strategy, which simulates a civilized voting process in our society that emphasizes the majority opinion and diminishes the minority opinion. This approach ensures that the learned representations of each peer are optimally adapted to the various data distributions. Extensive experiments on the Visual Genome dataset demonstrate that PSCV outperforms previous methods. We have established a new state-of-the-art (SOTA) on the SGCls task by achieving a mean of 31.6.
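
The consensus voting step could be realized as a simple majority vote over the peers' predicate predictions, as in the sketch below; the tie-breaking by averaged probabilities is an illustrative assumption, not the paper's exact strategy.

```python
import torch

def consensus_vote(peer_logits):
    """Ensemble peer predictions by majority voting (illustrative sketch).

    peer_logits: (P, N, C) predicate logits from P peers for N relations.
    Returns: (N,) predicate class with the most votes; ties are broken by the
    probabilities averaged over peers.
    """
    votes = peer_logits.argmax(dim=-1)                          # (P, N) each peer's choice
    num_classes = peer_logits.size(-1)
    counts = torch.zeros(votes.size(1), num_classes, device=peer_logits.device)
    counts.scatter_add_(1, votes.t(), torch.ones_like(votes.t(), dtype=torch.float))
    mean_prob = peer_logits.softmax(dim=-1).mean(dim=0)         # (N, C) tie-breaker
    return (counts + 1e-3 * mean_prob).argmax(dim=-1)

# toy usage: 3 peers, 5 relations, 50 predicate classes
pred = consensus_vote(torch.randn(3, 5, 50))
```
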